Search Results: "Roland Mas"

29 September 2011

Roland Mas: Non-FusionForge, September 2011

Besides FusionForge stuff, I also kept busy with other things.

Roland Mas: FusionForge, September 2011

A summary of my FusionForge-ish activities this month:

25 August 2011

Roland Mas: Hack of the day: wondershaper vs. SSH shared connections vs. SCP

We all hate the A in ADSL, but most of us are stuck with it. So there are any number of workarounds that keep our link to the Internet working with a certain amount of perceived fluidity, by way of "traffic shaping". Many of us use wondershaper for that purpose: it's a magic tool that keeps the latency low for interactive traffic by ensuring that the bulk traffic doesn't interfere too much with it.

We all hate the A in ADSL, but most of us are stuck with it. So there are any number of workarounds that make the establishment of new connections to external servers feel faster. Many of us use the SSH "ControlMaster" series of options for that purpose: it's a set of options that allow new connections to a server to reuse an existing one if there is already a connection open (or if there was one until not too long ago), so some time is saved because the handshake to establish the connection is only needed once.

Now combine both. You SSH to an external server, for instance, and work on it. Since it's meant to be an interactive connection, slogin will set some appropriate flags on the IP packets, and wondershaper will give these packets priority over whatever bulk traffic is happening at the time. At some point, you realise you need to transfer a large set of files there, so you start an SCP transfer. scp, being part of the SSH suite, will reuse the existing connection. Suddenly, your bulk transfer will not only get priority over the other kinds of traffic you might have, but also get intermingled with your keystrokes in the slogin session to the same server. This means your keystrokes will be delayed significantly, and working interactively becomes even more of a pain. In my case, the priority boost given to the SCP transfer is sometimes enough to cause other unrelated sockets to time out.

After reading some docs, I found my solution. It isn't much, but it's going to change my way of working significantly enough that I'm going to share it anyway.
alias scp='scp -oControlPath=none'
Also, since I use dput extensively, and it invokes scp in many cases without reusing my shell aliases, part of my ~/.dput.cf file reads:
[DEFAULT]
ssh_config_options = ControlPath=none
[ ]
This means I keep the advantages of traffic shaping for interactive sessions, but SCP transfers won't interfere anymore, and I can continue working on the server while the new versions of the packages that I need to get there are in transit. One small improvement at a time: progress!
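For context, the ControlMaster options this trick works around live in ~/.ssh/config. A minimal setup might look like the following (the host name is just an example, and ControlPersist assumes OpenSSH 5.6 or later):

```
Host example.org
    ControlMaster auto
    ControlPath ~/.ssh/control-%r@%h:%p
    ControlPersist 10m
```

With this in place, every ssh/scp/sftp invocation to that host shares one TCP connection, which is exactly why the `-oControlPath=none` override above is needed to keep bulk scp traffic on its own connection.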

21 July 2011

Raphaël Hertzog: People behind Debian: Martin Michlmayr, former Debian Project Leader

Martin Michlmayr has been a Debian developer since 2000 and I share quite a few things with him, starting with his age and involvement in the quality assurance team. He managed to be elected Debian Project Leader in 2003 and 2004. He's no longer as active as he used to be, but his input is always very valuable and he continues to do very interesting things, in particular concerning the support of NAS devices. Read on for the details.

Raphael: Who are you?

Martin: I'm Martin Michlmayr. I'm 32, originally from Austria, and currently living in the UK. I've contributed to various free software projects over the years, but Debian is without doubt the one I'm most passionate about. I joined Debian in 2000 when I was a student. I worked on Debian more or less full time for a few years while I was pretending to study. Later I started a PhD to do research about quality and management aspects of volunteer free software projects. I investigated the release process in several free software projects, in particular looking at time-based releases. After finishing my PhD in 2007, I joined Hewlett-Packard. I'm part of HP's Open Source Program Office and work on various free software and open source activities, both internally and within the community.

Raphael: How did you start contributing to Debian?

Martin: I first used Debian in the days of 0.93R6, some time around the end of 1995. The 0.93R6 release was still based on a.out, but I needed an ELF-based system for some application, so I moved to Slackware. I then quickly moved to Red Hat Linux, where I stayed for several years. I rediscovered Debian in 2000 and quickly decided to join the project. I cannot recall how I rediscovered Debian, but when I did, it was clear to me that Debian was the ideal project for me: I could identify with its philosophy, I liked the volunteer nature of the project, and I found the size and diversity of Debian interesting since a large project offers a lot of different challenges and opportunities.
I remember how many new things there were to learn, and back then the documentation and other resources for new contributors were nowhere near as good as they are today. My application manager, Julian Gilbey, was a great help: he was incredibly friendly and passionate about Debian. I also remember meeting up with Peter Palfrader (weasel) for key signing when we were both in the New Maintainer queue. I was incredibly lucky with my New Maintainer process and soon became an official Debian Developer. Because there was a shortage of application managers, my first major contribution in Debian was to become an application manager myself and help other people join the project. Debian is a large project with a long history and a rich culture, so new contributors should expect that it will take some time to become familiar with everything. Fortunately, there are many resources, such as documentation and the debian-mentors list, for new contributors. Another great way to become familiar with the way things are done in Debian is to subscribe to various Debian mailing lists and ideally to read some mailing list archives. It's also a great idea to attend the Debian Conference or other conferences, since meeting people in real life is a great way to integrate. I remember attending Debian Conference 1 in Bordeaux, where I gave my first public talk. Finally, new contributors should find an area where they can make a unique contribution. Most people in Debian maintain packages, but there are so many other ways to contribute. For example, most of my contributions were not technical but were about coordination and other organizational activities.

Raphael: What's your biggest achievement within Debian?

Martin: I'm particularly proud of a number of achievements:

Raphael: Speaking about NAS devices: what exactly are you doing on this topic and how can people help?
Martin: There are plenty of instructions on the Internet for installing Linux distributions on NAS or various embedded devices by connecting a serial console and then typing in hundreds of commands. What I found is that such instructions significantly limit the user base because they are way too complicated for most users. There are just too many steps that can go wrong. So instead, in Debian, we provide a solution that just works: usually, you download a firmware image for your NAS device from Debian, and when you upgrade you get the Debian installer. You connect to the installer via SSH and perform a normal installation. The installer knows about the device and will prepare everything for you automatically. For example, it knows if the device has requirements for the partition layout, and it will install the kernel where the device expects to find it. Unfortunately, NAS devices are not like PCs, so the requirements are different for almost every device, and therefore you need special code to support a new device. Finally, there are detailed installation guides and we provide help on our mailing lists. There are a number of technical areas for improvement. The installation could be made even easier, and it would be nice to support new platforms and devices. A bigger problem is that while we've implemented a great solution for NAS devices, we haven't really extended this work to support other classes of devices. For example, tablets and mobile phones are getting incredibly popular, and we don't have a compelling solution for such devices, mostly because of the lack of an appropriate GUI.

Raphael: What are your plans for Debian Wheezy?

Martin: I've recently been asked by Stefano Zacchiroli, our current Debian Project Leader, to coordinate the care-taking of Debian finances. Debian, as a volunteer project, relies on donations and in-kind gifts (e.g.
hardware) to maintain its infrastructure and to support various development efforts, such as funding sprints and other developer gatherings. Debian's money and other assets are held by affiliate organizations around the world. My responsibility will be to keep track of money and other assets (e.g. hardware and trademarks), work with the DPL to establish procedures related to the use of Debian's assets, and make sure that the procedures are followed. Finally, we want to publish public statements so our donors know how we use their donations to further improve Debian. I just started working on this and it will be my main activity in Debian in the coming months.

Raphael: Speaking of money, I plan to run a fundraising campaign to get the Debian book I wrote with Roland Mas translated (cf. http://debian-handbook.info). Is this something Debian should support?

Martin: First of all, I should make it clear that I don't decide how Debian spends its money. This is up to the DPL to decide together with the project as a whole. I'll just make sure that procedures are followed and expenses tracked and reported properly. Having said that, in my opinion, it's unlikely that Debian as a project will fund this effort. It would be inconsistent with the position of the project not to fund work directly (only some related expenses, such as travel costs to allow Debian teams to organize face-to-face meetings). Whether Debian should support the fundraising effort by helping to promote it is another question, and that's probably not as clear cut. It looks like a worthwhile effort, but on the other hand it would be unfair to authors of other Debian books for Debian to put its weight behind one, and there are many other efforts that are worth promoting: if you promote one, where do you stop? So while it sounds worthwhile, it's probably better for Debian to stay out of it. But somehow related to this, I sometimes worry about the fact that there are so few paid opportunities around Debian.
If you contribute to the Linux kernel for a while, you have an excellent chance of getting hired by someone to work on the kernel full time. The kernel may be an extreme example, but there are a lot of projects that have more paid opportunities than Debian, e.g. Mono, GNOME, OpenOffice/LibreOffice and KDE. Obviously, there are some Debian Developers who can spend some time on Debian as part of their job. I know that some Canonical employees contribute to Debian, that support companies like credativ improve Debian as part of their work, and that system administrators fix bugs or package new software as they deploy Debian. But I think this is a minority of contributors, and even they don't work full time on Debian. Instead, what I see is that a lot of people leave university, get a job and then no longer have time for Debian, or people start a family and no longer have time. I can take myself as an example, since I don't have nearly as much time as I did in the past when I was a student. I guess there are different ways to deal with this problem: one would be to create more paid opportunities around Debian outside the project, another might be to make it easier for new volunteers to join the project. I don't have the answers to these questions, but it's something I wonder about, and I also wonder whether pure volunteer projects can still keep up with projects with a lot of full-time contributors.

Raphael: What motivates you to continue to contribute year after year?

Martin: Debian is a great project with a great mission, goals and people. I contribute to make Debian a better solution and to promote the free software philosophy. Finally, the community around Debian provides a lot of motivation. It's amazing how much I've learned about other cultures because of my involvement in Debian, and how many friends I've made over the years all around the world.

Raphael: Do you have wishes for Debian Wheezy?

Martin: Not really.
I'm pretty happy with the way things are going at the moment. We have made a lot of organizational changes in the last few years, from which the project has greatly benefited. I'm particularly pleased about the plans to adopt a time-based freeze.

Raphael: Is there someone in Debian that you admire for their contributions?

Martin: There are many people I admire greatly. I'd like to mention Joey Hess because he's a great example to follow. He doesn't get involved in politics, is easy to work with and does great technical work. In fact, he has made not one but several contributions that have completely changed Debian (debconf, debhelper, and debian-installer). Furthermore, Debian has a lot of contributors who have done great work over the years but who are not very vocal about it. People like Colin Watson or Peter Palfrader. Debian has many unique contributors, and the list of people I admire is much longer than the few people I just mentioned.
Thank you to Martin for the time spent answering my questions. I hope you enjoyed reading his answers as much as I did. He raised some interesting questions. Subscribe to my newsletter to get my monthly summary of the Debian/Ubuntu news and to not miss further interviews. You can also follow along on Identi.ca, Twitter and Facebook.

No comment Liked this article? Click here. My blog is Flattr-enabled.

3 May 2011

Raphaël Hertzog: My Debian activities in April 2011

This is my monthly summary of my Debian-related activities. If you're among the people who support my work, then you can learn how I spent your money. Otherwise it's just an interesting status update on my various projects.

GNOME 3 packaging

Right after the GNOME 3 release, I was eager to try it out, so I helped the pkg-gnome team to update some of the packages. I did some uploads of totem, totem-pl-parser, gvfs, mutter, gnome-shell and gnome-screensaver. I also kept people informed via my blog and prepared a pinning file for adventurous users who wanted to try it out from experimental (like me). One month later, I'm still using GNOME 3. There are still rough edges, but not so many, and I'm starting to get used to it.

Debian Rolling planning

Debian Rolling has been a project on my TODO list for quite some time. I decided it was time to do something about it and started a series of articles to help clarify my ideas while getting some early feedback. My goal was to prepare a somewhat polished proposal before posting it to a Debian mailing list. But as usual with Murphy's law, my plan did not work out as expected. Almost immediately after my first post the discussion started on debian-devel:
At this point it's a discussion thread of several hundred messages (there are several screens of messages like the one above). Many of the sub-threads have been interesting, but the general discussions mixed too many different things, so there's no clear outcome yet. Lucas Nussbaum tried to make a summary. Obviously I must adjust my plan; there's lots of feedback to process. I agreed to drive a DEP together with Sean Finney to help structure the part of the discussion that focuses on allowing development to continue during freezes. But I'm also eager to fix the marketing problem of testing and to have the project recognize that testing is a product in itself and that end-users should be encouraged to use it.

Package Tracking System maintenance

The Package Tracking System is an important tool for Debian developers, and it had been broken by some change on the Bug Tracking System. I worked around it quickly enough that few people noticed the problem, but Cron kept reminding me that I had to properly fix it. I ended up doing so last weekend. While working on the PTS, I took the opportunity to merge a patch from Jan Dittberner to enhance the news RSS feed that the PTS provides. I also integrated information from backports.debian.org (thanks to Mehdi Dogguy for reminding me of #549115).

Multiarch update

Not much new this month. I fixed two bugs in the multiarch dpkg branch thanks to bug reports from Ubuntu users (LP 767634, LP 756381). I'm still waiting on Guillem Jover finishing his review of the multiarch branch. I'm pinging him from time to time, but it looks like multiarch is no longer on his short-term priority list. :-( I've been running this code for more than 2 months and it works fine. I want to see it merged. I'm ready to update my code should anything need to be changed to please Guillem. But without any feedback we're in a deadlock.
Misc dpkg work

While fixing a bug in update-alternatives (found in one of the valid reports on Launchpad), I noticed that there was room for improvement in the error messages output by update-alternatives. I changed them to reuse the same strings that were already used in other parts of dpkg. The result is that there are a few strings fewer to translate (always a nice thing for the poor translators who have to deal with the thousands of strings that dpkg contains). I also tried to fix some of the most cryptic error messages in dpkg (see #621763), but that work is stalled at the request of Guillem.

Book update

We (Roland Mas and I) are almost done with the update of our French book for Debian Squeeze. It will hit the shelves in July or September. I'm starting to prepare the fundraising campaign for an English translation of it. We'll use ulule.com for this. On my blog, I have been pleased to interview Meike Reichle; she's the first woman I have interviewed in the series, but certainly not the last one. I also interviewed Adam D. Barratt, one of our tireless release managers.

Thanks

Many thanks to the people who gave me €180.35 in March and €235.37 in April. That represents 1.5 and 2 days of work for those months. See you next month for a new summary of my activities.


11 April 2011

Raphaël Hertzog: Journey of a new GNOME 3 Debian packager

With all the buzz around GNOME 3, I really wanted to try it out for real on my main laptop. It usually runs Debian unstable, but that's not enough in this case: GNOME 3 is not fully packaged yet and it's only in experimental for now. I asked Josselin Mouette (of the pkg-gnome team) when he expected it to be available, and he could not really answer because there's lots of work left. Instead Roland Mas gently answered me: "Sooner if you help". :-)

First steps as a GNOME packager

This is pretty common in free software, and for once I followed the advice: I spent most of Sunday helping out with GNOME 3 packaging. I have no prior experience with GNOME packaging, but I'm fairly proficient in Debian packaging in general, so when I showed up on #debian-gnome (irc.debian.org) on Sunday morning, Josselin quickly added me to the team on alioth.debian.org. Still being a pkg-gnome rookie, I started by reading the documentation on pkg-gnome.alioth.debian.org. This is enough to know where to find the code in the SVN repository and how to do releases, but it doesn't contain much information about what you need to know to be a good GNOME packager. It would have been great to have some words on introspection and what it changes in terms of packaging, for instance. Josselin suggested that I start with one of the modules that had not yet been updated at all (most packages have a pre-release version, usually 2.91, in experimental, but some are still at 2.30).

Packages updated and problems encountered

(You can skip this section if you're not into GNOME packaging.) So I picked up totem. I quickly updated totem-pl-parser as a required build-dependency and made my first mistake by uploading it to unstable (it turns out it's not a problem for this specific package). Totem itself was more complicated, even if some preliminary work was already in the Subversion repository.
It introduces a new library which required a new package, and I spent a long time debugging why the package would not build in a minimalistic build environment. While the package was building fine in my experimental chroot, I took care to build my test packages like the auto-builders would, with sbuild (in a sid environment plus the required build-dependencies from experimental), and there it was failing. In fact, it turns out pkg-config was failing because libquvi-dev was missing (and it was required by totem-pl-parser.pc), but this did not leave any error message in config.log. Next, I decided to take care of gnome-screensaver, as it was not working for me (I could not unlock the screen once it was activated). When built in my experimental chroot it was fine, but when built in the minimalistic environment it was failing. It turns out /usr/lib/gnome-screensaver/gnome-screensaver-dialog was loading both libgtk2 and libgtk3 at the same time and crashing. It's not linked against libgtk2, but it was linked against the unstable version of libgnomekbdui, which still uses libgtk2. Bumping the build-dependency on libgnomekbd-dev fixed the problem. In the evening, I took care of mutter and gnome-shell, and did some preliminary work on gnome-menus.

Help is still welcome

There's still lots of work to do; you're welcome to do like me and join in to help. Come to #debian-gnome on irc.debian.org, read the documentation, and try to update a package (and ask questions when you don't know).

Installation of GNOME 3 from Debian experimental

You can also try GNOME 3 on your Debian machine, but at this point I would advise doing so only if you're ready to invest some time in understanding the remaining problems. It's difficult to cherry-pick just the required packages from experimental; I tried it, and at the start I ended up with a bad user experience (important packages like gnome-themes-standard or gnome-icon-theme not installed/updated, and similar issues).
To help you out with this, here's a file that you can put in /etc/apt/preferences.d/gnome to allow APT to upgrade the most important GNOME 3 packages from experimental:
Package: gnome gnome-desktop-environment gnome-core alacarte brasero cheese ekiga empathy gdm3 gcalctool gconf-editor gnome-backgrounds gnome-bluetooth gnome-media gnome-netstatus-applet gnome-nettool gnome-system-monitor gnome-system-tools gnome-user-share baobab gnome-dictionary gnome-screenshot gnome-search-tool gnome-system-log gstreamer0.10-tools gucharmap gvfs-bin hamster-applet nautilus-sendto seahorse seahorse-plugins sound-juicer totem-plugins remmina vino gksu xdg-user-dirs-gtk gnome-shell gnome-panel dmz-cursor-theme eog epiphany-browser evince evolution evolution-data-server file-roller gedit gnome-about gnome-applets gnome-control-center gnome-disk-utility gnome-icon-theme gnome-keyring gnome-menus gnome-panel gnome-power-manager gnome-screensaver gnome-session gnome-settings-daemon gnome-terminal gnome-themes gnome-user-guide gvfs gvfs-backends metacity mutter nautilus policykit-1-gnome totem yelp gnome-themes-extras gnome-games libpam-gnome-keyring rhythmbox-plugins banshee rhythmbox-plugin-cdrecorder system-config-printer totem-mozilla epiphany-extensions gedit-plugins evolution-plugins evolution-exchange evolution-webcal gnome-codec-install transmission-gtk avahi-daemon tomboy network-manager-gnome gnome-games-extra-data gnome-office update-notifier shotwell liferea epiphany-browser-data empathy-common nautilus-sendto-empathy brasero-common
Pin: release experimental
Pin-Priority: 500
Package: *
Pin: release experimental
Pin-Priority: 150
The list might not be exhaustive, and sometimes you will have to give supplementary hints to APT for the upgrade to succeed, but it's better than nothing. I hope you find this useful. I'm enjoying my shiny new GNOME 3 desktop, and it's off to a good start. My main complaint is that hamster-applet (the time tracker) has not yet been integrated into the shell.


26 January 2011

Raphaël Hertzog: My Debian related goals for 2011

Like last year, here's a list of Debian-related projects that I'd like to tackle this year. I might not do them all, but I like to write them down: I am more productive when I have concrete objectives.
  1. Translate my Debian book into English.
    I will run a fundraising campaign with Ulule.com and, if enough people are interested, I will spend a few months with Roland Mas translating the book. Hopefully this project can be completed by the summer.
  2. Finish multiarch support in dpkg.
    I'm working on this with Guillem Jover right now, thanks to Linaro, who are sponsoring my work.
  3. Make deb files use XZ compression by default.
    It's a simple change in dpkg-deb and it would literally save gigabytes of space on mirrors. It's particularly important so that a single CD/DVD can hold a more complete set of software. #556407 (on DAK) needs to be fixed first, though, and a project-wide discussion is in order. Some architectures might want to have a different default.
  4. Be more reactive to review/merge dpkg patches.
    While we welcome help, we have a backlog of patches sitting in the BTS, and it has happened several times that we failed to review/merge submitted branches in a decent time. It's very frustrating for the contributor. I already tried to improve the situation by creating the Review/Merge Queue, but nobody stepped up to help review and update the patches.
    As I am getting more familiar with the C part of dpkg, I hope to be able to jump in when Guillem doesn't have the time or the interest.
  5. Implement the rolling distribution proposed as part of the CUT project and try to improve the release process.
    I really want the new rolling distribution, but it will inevitably have an impact on the release process; it can't be done as a standalone project. I would also like to see progress in the way we deal with transitions (see the discussion here).
  6. Work more regularly on the developers-reference.
    Hopefully I will be able to combine this with my blog-writing activities, i.e. write blog articles on the topics that the developers-reference shall cover and then adapt my articles with some DocBook markup.
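Goal 3 in the list above can already be tried per-package today: dpkg-deb accepts a -Z option to select the compressor used when building a .deb. A quick sketch (the package built here is a throwaway, invented purely for illustration; it assumes dpkg-deb is installed):

```shell
# Build a minimal throwaway binary package with XZ-compressed members.
mkdir -p demo-pkg/DEBIAN
cat > demo-pkg/DEBIAN/control <<'EOF'
Package: demo-pkg
Version: 1.0
Architecture: all
Maintainer: Example Maintainer <demo@example.org>
Description: throwaway package demonstrating dpkg-deb -Zxz
EOF
# -Zxz selects XZ compression instead of the gzip default of the time.
dpkg-deb -Zxz --build demo-pkg demo-pkg_1.0_all.deb
```

Inspecting the result with `ar t demo-pkg_1.0_all.deb` should show a data.tar.xz member instead of data.tar.gz, which is where the space savings on the mirrors would come from.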
To the above list, I shall add a few supplementary goals related to funding my Debian work:
  1. Write a 10-lesson course called "Smart Package Management".
    It will be delivered by email to my newsletter subscribers.
  2. Create an information product (most likely an ebook or an online training) and sell it on my blog.
    The precise topic has not yet been defined although I have a few ideas. Is there something that you would like me to teach you? Leave your suggestions in a comment.
  3. By the end of the year, have at least 1/3 of my time funded by donations and/or earnings of my information products.
    More concretely, it means €700 each month, or a 9-fold increase compared to the current income level (around €80/month, mostly via Flattr).
That makes for a lot of challenges this year. You can help me reach those goals: join my Debian Supporters Guild and you'll be informed every time I start something new or when you can help in specific ways.


25 November 2010

Roland Mas: FusionForge news, November 2010

Okay, so it's been five months since the last update. Sorry about that. I guess I could force myself to post more regular updates. What have you missed? Only the branching of what will eventually (soon?) become FusionForge 5.1. Quoting from the default home page, and in no particular order: We've also added many tests to our testsuite, and we'd welcome more. If you're interested in the new features coming to FusionForge, now would be a good time to try them and report the bugs you find. And if you don't want to upgrade your production server just yet, you can use a VM image that builds and installs everything automatically inside a VirtualBox (or another virtualization system). See the details in my "VM image for easy testing" mailing-list post. That VM covers trunk rather than the 5.1 branch, but since the branching is still rather recent there isn't much difference yet. Another tidbit of information: since I recently realized the IPv4 horizon is less than a hundred days away, I've been engaging in a flurry of migrations from IPv4-only to dual-stack, one result of which is that fusionforge.org is now accessible via IPv6. Yay for less NAT and reverse-proxying. And I think that's it for now. There's one obvious piece of news I'm looking forward to announcing, but it has to actually happen first...

6 October 2010

Daniel Kahn Gillmor: monkeysphere and distributed naming

Roland Mas writes an interesting article about decentralized naming, in which he says:
Monkeysphere aims at adding a web of trust to the SSL certificates system, but the CA chain problem seems to persist (although I must admit I'm not up to speed with the actual details).
Since i'm one of the Monkeysphere developers, i figure i should respond! Let me clarify that Monkeysphere doesn't just work in places where X.509 (the SSL certificate system) works. It works in other places too (like SSH connections). And I don't think that the CA chain problem that remains in Monkeysphere is anything like the dangerous mess that common X.509 usage has given us. I do think that at some level, people need to think about who is introducing them to other people -- visual or human-comprehensible representations of public key material are notoriously difficult to make unspoofable. On the subject of distributed naming: OpenPGP already allows distributed naming: every participant in the WoT is allowed to assert that any given key maps to any given identity. Duplicates and disagreements can exist just fine. How an entity decides to certify another entity's ID without a consensus global namespace is a tough one, though. If i've always been known as "John Smith" to my friends, and someone else has also been known as "John Smith" to his friends, our friends aren't actually disagreeing or in conflict -- it's just that neither of us has a unique name. The trouble comes when someone new wants to find "John Smith" -- which of us should they treat as the "correct" one? I think the right answer probably has to do with who they're actually looking for, which has to do with why they're looking for someone named "John Smith". If they're looking for John Smith because the word on the street is that John Smith is a good knitter and they need a pair of socks, they can just examine what information we each publish about ourselves, and decide on a sock-by-sock basis which of us best suits their needs. But if they're looking for "John Smith" because their cousin said "hey, i know this guy John Smith. I think you would like to argue politics over a beer with him", then what matters is the introduction. 
And OpenPGP handles that just fine -- if their cousin has only ever met a single John Smith, that's the right one. If their cousin has met several John Smiths, then the searcher would do well to ask their cousin some variant of "hey, do you mean John Smith or John Smith ", or even "do you mean the John Smith who Molly has met, or the one who Charles has met?" (assuming that Molly and Charles have each only certified one John Smith in common with the cousin, and not the same one as each other), or to get a real-time introduction to a particular John Smith, where his specific key is somehow recordable by the searcher for future conversations (or beer drinking). This is what we do in the real world anyway. We currently lack good UIs for doing this over the network, but the certification infrastructure is in place already. What we're lacking in infrastructure, though, is a way to have distributed addressing. Roland's proposal was to publish addresses corresponding to cryptographic identities within some DNS zone, or in freenet or gnutella. Another approach (piggybacking on existing infrastructure) would be to include IP address information in the OpenPGP self-certification, so the holder of the name could claim exactly their own IP address. This could be distributed through the keyserver network, just like other updates are today, and it could be done simply and immediately with a well-defined OpenPGP notation. I'd be happy to talk to interested people about how to specify such a notation, and what possible corner cases we might run into. Drop a note here, or mail the Monkeysphere mailing list or hop onto #monkeysphere on irc.oftc.net.

Tags: authentication, distributed naming, identity, monkeysphere

Roland Mas: Automounting a LUKS-encrypted USB key

You have a computer. You're afraid of it being stolen by baddies or raided by the police, so you've encrypted its hard disk with LUKS. You also want to carry around some of your crypto keys (SSH and/or GPG), or any kind of sensitive data, on a USB key, so you can resume normal activity in case the computer gets stolen or becomes untrustworthy. You like things that Just Work, including automounting with autofs (no GUI popups, no need to manually unmount and so on). But of course, you can't very well just mount the drive and store your keys on it in the clear. Well, actually you can (I did it for way too long), but let's assume you want to do without. Because USB keys get stolen or lost, too. autofs just handles the mounting of the drives, not their unlocking, and there's no hook in there specifically for LUKS. There is, however, a more generic way of running arbitrary commands on mounting: program maps. It's all described in the documentation, so I'll just paste the relevant parts of my /etc/auto.removable, with a few comments. Remember to make it executable and reference it in /etc/auto.master.
 #! /bin/sh
 volume=$1
 autoluks () {
   dev=/dev/$1
   cryptname=${1}_crypt
   cryptdev=/dev/mapper/$cryptname
   keyfile=/etc/private/$cryptname.key
   options=$2
   [ ! -e $dev ] && exit 0
   if [ -b $cryptdev ] && ! cryptsetup status $cryptname | grep -q device:.*$dev ; then
       cryptsetup remove $cryptname | logger
   fi
   cryptsetup --key-file $keyfile luksOpen $dev $cryptname | logger
   [ -b $cryptdev ] && echo $options :$cryptdev || true
 }
 case "$volume" in
   # LUKS-encrypted volumes
   lacie-keys)
       autoluks lacie-keys -fstype=ext4,ro,noatime,nodev,noexec,nosuid
       ;;
   lacie-keys-rw)
       autoluks lacie-keys -fstype=ext4
       ;;
   lacie-backups)
       autoluks lacie-backups -fstype=ext4
       ;;
   # Non-encrypted volumes
   cd)
       echo -fstype=iso9660,ro,nodev,nosuid            :/dev/cdrom
       ;;
   floppy)
       echo -fstype=auto,sync,nodev,nosuid             :/dev/fd0
       ;;
 esac
The astute reader will have noticed that the keys volume can be mounted either read-only or read-write with the same mechanism. This volume contains SSH keys that I use quite regularly, so that filesystem is going to be mounted and unmounted very often. USB keys, and flash memory in general, don't really like repeated writes. Fortunately, the SSH keys are mostly read, and very rarely written to, so I can afford to mount the partition read-only and save the write cycles for when I generate new keys (the mere action of mounting a partition read-write changes it, even if the files in it are never modified). The backups volume is mostly accessed in write mode, and much less frequently anyway, so there's no need for a distinction there. This assumes that the partitions on the USB key appear as /dev/lacie-keys and /dev/lacie-backups. Due to the dynamic naming of devices, you may like to use rules similar to the following in /etc/udev/rules.d/local.
 KERNEL=="sd[a-z]1", ATTRS{idProduct}=="1027", ATTRS{idVendor}=="059f", ATTRS{manufacturer}=="LaCie", ATTRS{product}=="LaCie iamaKey", SYMLINK+="lacie-keys"
 KERNEL=="sd[a-z]2", ATTRS{idProduct}=="1027", ATTRS{idVendor}=="059f", ATTRS{manufacturer}=="LaCie", ATTRS{product}=="LaCie iamaKey", SYMLINK+="lacie-backups"
You'll also need to initialize LUKS on the partitions, save the key, make the filesystems, and so on. If you're still with me, you probably know what I'm talking about, so I don't need to explain that here. This setup works for me, but there's no guarantee and so on. Based on a tutorial found on the Debian Administration website (a great resource, by the way). Update: I'm told a large part of the script could be replaced with pmount. If you like short scripts, you may want to investigate that too. Update 2: Josselin points out a different solution using Gnome, udisks, gnome-disk-utility and Nautilus. It's worth checking out, for those who like to run all that and click around. I can't test it right now (Gnome/Nautilus currently refuses to mount any USB key, for some presumably transitional reason), but I have no reason to believe it's inherently broken. I just don't like having to do anything when I plug my key or before I unplug it, and I'm not always running a graphical session with a desktop.

2 October 2010

Roland Mas: For a truly acentric Internet

For some time, I've had this idea brewing for how the 'net would work without DNS. Initially I was mostly thinking of how to get rid of the dark side of the registrar business, but it appears that maybe it could help with security in general too. It may not be feasible at all, or I may have forgotten extremely crucial points. I'm not a DNS guru, so bear with me and consider this as a thought experiment. This is, however, a proposal for how the Internet could be made truly decentralised, or acentric. Made so, because it currently is very centralised on at least two counts: the DNS is an inherently hierarchical structure of authority; and so is the situation with the SSL certificates used for securing websites. Let's start with DNS, if you will. Basically, the idea is to replace the current Domain Name System, which is centralised in nature (even if subdomains can be delegated), with a fully decentralised system with no central authority. The point of current DNS is to map names to IP addresses in a unique way. It currently works by registering a "this name is managed by such-and-such server, ask for details at such-and-such IP address" message. Of course, since there's only one namespace (or a limited number thereof, if you consider that TLDs are different namespaces), there needs to be a central authority to resolve conflicts, because some names will be more popular than others. I propose to start with a cryptographic key pair (think GPG key). Anyone can create their key pair; we assume that it's extremely unlikely for two people to generate the same keys, and that it's also very hard for two people to generate keys with the same fingerprint. By (being the only one) knowing the secret part of a key, I am the authority on what can be signed with it. I could for instance sign records that say that www points to this set of IP addresses, and smtp points to that other address. Within my area of authority, which is to say within my domain.
Which is identified by the fingerprint of my key pair. What do we have so far? A mechanism that allows verifying, in a decentralised way, who's authoritative on the F5566852EB92BD779CF137190EA756B5144843F5 domain. In DNS terms, more probably F55...3F5.some-tld. Now let's publish the signed records on a decentralised peer-to-peer network. I'm not up to date with the latest trends in that area, but I think something like Gnutella or Freenet would qualify, since it doesn't need central hubs but only an initial list of nodes, and it does automatic discovery of new nodes. Projects such as the FreedomBox (name likely to be changed) will probably help with that. Anyway, when looking up www.F55...3F5.some-tld, the resolver gets the signed record (and the public part of the key) from the P2P network, checks the signature, and uses the contents of the record to determine the IP address. No central or delegated registry involved at any point. Less risk of a single point of failure, which means better resilience against outages. Also, better resilience against DNS-based censorship, be it promoted by non-democratic states or by greedy corporations. Of course, F5566852EB92BD779CF137190EA756B5144843F5 isn't a terribly easy name to remember or transmit. But the string of hex could be presented to the user in some other form that's easier to recognise for people without geek super-powers. My favourite would be the bibi-binary representation, but I'm told that the bubble-babble method works equally well. It would still be unreasonable to expect people to remember such names, and especially to type them into their web browsers or e-mail agents. A possible solution to that would be to use a short form, such as 144843F5 (or its bibi-binary equivalent), complemented by a visual representation of the full fingerprint such as the one SSH does.
That pair could be displayed on business cards, email signatures, glossy brochures, advertisements and so on, and allow users to just type the strange word or flash the 2D barcode, check that the funny picture matches (in case the strange word returns several results), and get the website they want. Which they can of course bookmark under a name they can remember, probably even the same name that everyone else uses to bookmark that website. And nothing precludes convenient directories mapping names to full URLs (like search engines currently do) as long as users check that the visual representation of the site they go to matches what they've seen elsewhere. That elsewhere could, of course, take advantage of the natural web of trust that emerges in GPG-using communities. If I'm directed at a website whose key I don't immediately recognize, then I can see who trusts that key to be legitimate (by checking the signatures), and who trusts them, and so on. Ultimately, the decision is mine to make, and mine only. In any case, once I know what URL I want to visit, I can be sure this DNS replacement will give me the right IP address (or addresses). DNS servers replying with advertisement sites instead of correct ones, and DNS-based filtering, will be things of the past. Of course, having the right IP address only solves the attacks on the DNS part. Eavesdropping can also happen by diverting traffic at the IP level. But we have a trustable way to know information related to a domain, so let's use it: the signed record can also contain the fingerprint of cryptographic keys (SSL or SSH or whatever). So we can be sure we're actually talking to the correct server, rather than to some man-in-the-middle spy. With the same guarantees of correctness as the ones for the DNS, backed by the web of trust and out-of-band validation.
Again, we get rid of the hierarchy of certification authorities, some of which have been known for helping organizations get at the (supposedly secret) data people exchange over the net. To summarize: with this scheme, if I know the site I want to access is F55...3F5, then I can be certain that a) I get the right IP address for it and b) I'm actually talking to it. And nobody in-between can intercept the traffic, if the site is properly administrated and its private key is kept secure. And from the site admin's point of view, there's no need to trust DNS registrars and SSL certification authorities not to fool around with the data, or blackmail me into paying increasing fees for keeping my URL working or my SSL key certified. There's still the problem of the allocation of the IP addresses themselves, but two major parts of the centralized and hierarchical net can be gotten rid of. There are proposals attempting to fix some of the problems already, but they're incomplete. DNSSEC only solves the authentication of DNS records, not the centralisation problem. Monkeysphere aims at adding a web of trust to the SSL certificates system, but the CA chain problem seems to persist (although I must admit I'm not up to speed with the actual details). This is of course only a rough draft, full of technical considerations yet certainly not exhaustive. I've thought about it though, and I believe there are no theoretical problems with the implementation of such a scheme (apart from the handling of expiration and revocations, probably). The main obstacle to mass-adoption I can envision is the UI part, and the entrenched habit of just typing a keyword into a search engine and implicitly trusting its results, then blindly trusting the website itself; but even if mass-adoption isn't reached, I can't see anything wrong with giving those who want to be careful a more secure Internet.
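As a back-of-the-envelope illustration of the naming part (nothing here is a real protocol: the record format, the helper names, and the SHA-1-of-the-blob "fingerprint" are all made up for the sketch), here's how a key fingerprint could act as the domain name, with a short typable form taken from its tail:

```python
import hashlib

def fingerprint(pubkey_bytes):
    # Hex digest of the public key material, used as the "domain".
    # (Real OpenPGP fingerprints are computed over a specific packet
    # structure; a plain SHA-1 of the key blob stands in for that here.)
    return hashlib.sha1(pubkey_bytes).hexdigest().upper()

def short_form(fp, n=8):
    # The human-typable tail of the fingerprint, like 144843F5.
    return fp[-n:]

def make_record(name, addresses):
    # An unsigned "www points to these IPs" record; in the proposed
    # scheme this blob would be signed with the key's secret half
    # before being published on the peer-to-peer network.
    return "%s IN A %s" % (name, ",".join(addresses))

# Hypothetical key material, for illustration only.
key = b"-----BEGIN EXAMPLE KEY-----..."
fp = fingerprint(key)
print(fp)              # 40 hex digits: the full "domain" name
print(short_form(fp))  # what goes on the business card
print(make_record("www", ["192.0.2.10", "192.0.2.11"]))
```

(The 192.0.2.x addresses are from the reserved documentation range; the visual-fingerprint and bibi-binary parts are left as an exercise.)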

8 September 2010

Roland Mas: Gnus, Dovecot, OfflineIMAP, search: a HOWTO

A long time ago, when I was first introduced to email, I was using the Mail program from Unix. I quickly converted to Elm, then Mutt, which were both better in terms of interface. Then I found out about Gnus, and I wouldn't dream of letting it go now. However, Gnus has started showing its age several times, and several times have I needed to upgrade the way I was using it: first because I needed to sort and split email, then because I took the sorting out of Gnus and into Procmail for more advanced filtering (including spam filtering), then because I switched to storing the emails on an IMAP server so I could read them remotely from several computers. My setup as of a few days ago was functional, but since I have grown over time to splitting emails into several hundred folders, checking for new messages was becoming more and more boring. So it's time to jump in with all the cool kids and switch to a modern solution: still Gnus of course, but with Dovecot, OfflineIMAP for synchronisation, and let's add email searches into the mix while we're at it. My web searches didn't turn up a simple step-by-step HOWTO, but I assembled bits from different places, and here's my attempt at documenting my new setup. Goals. Assumptions. Dovecot setup. We'll use Dovecot as a local IMAP server. And since we're lazy, we'll access it over a pipe, and dispense with the network part. OfflineIMAP setup. OfflineIMAP is basically an optimised two-way synchronisation mechanism between two email repositories. We'll use it in IMAP-to-IMAP mode.
 [general]
 accounts = MyAccount
 pythonfile = .offlineimap.py
 [Account MyAccount]
 localrepository = LocalIMAP
 remoterepository = RemoteIMAP
 # autorefresh = 5
 # postsynchook = notmuch new
 [Repository LocalIMAP]
 type = IMAP
 preauthtunnel = MAIL=maildir:$HOME/Maildir /usr/lib/dovecot/imap
 holdconnectionopen = yes
 [Repository RemoteIMAP]
 type = IMAP
 remotehost = mail.example.com
 remoteuser = jsmith
 remotepass = swordfish
 ssl = yes
 nametrans = lambda name: re.sub('^INBOX.', '', name)
 # folderfilter = lambda name: name in [ 'INBOX.important', 'INBOX.work' ]
 # folderfilter = lambda name: not (name in [ 'INBOX.spam', 'INBOX.commits' ])
 # holdconnectionopen = yes
 maxconnections = 3
 # foldersort = lld_cmp
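The nametrans and folderfilter settings are ordinary Python callables, so their effect is easy to check outside OfflineIMAP. A quick sketch replaying the lambdas from the configuration above (the second, commented-out folderfilter variant):

```python
import re

# Strip the remote server's INBOX. prefix so folder names match locally.
# (Strictly, the '.' is a regex wildcard, so it matches any character
# after INBOX, not just a literal dot.)
nametrans = lambda name: re.sub('^INBOX.', '', name)

# Sync everything except the spam and commit-mail folders.
folderfilter = lambda name: not (name in ['INBOX.spam', 'INBOX.commits'])

print(nametrans('INBOX.work'))     # 'work'
print(nametrans('INBOX'))          # unchanged: no 'INBOX.' prefix
print(folderfilter('INBOX.spam'))  # False: skipped during the sync
```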
 # Propagate gnus-expire flag
 from offlineimap import imaputil
 def lld_flagsimap2maildir(flagstring):
  flagmap = { '\\seen': 'S',
              '\\answered': 'R',
              '\\flagged': 'F',
              '\\deleted': 'T',
              '\\draft': 'D',
              'gnus-expire': 'E' }
   retval = []
   imapflaglist = [x.lower() for x in flagstring[1:-1].split()]
   for imapflag in imapflaglist:
       if flagmap.has_key(imapflag):
           retval.append(flagmap[imapflag])
   retval.sort()
   return retval
 def lld_flagsmaildir2imap(list):
  flagmap = { 'S': '\\Seen',
              'R': '\\Answered',
              'F': '\\Flagged',
              'T': '\\Deleted',
              'D': '\\Draft',
              'E': 'gnus-expire' }
   retval = []
   for mdflag in list:
       if flagmap.has_key(mdflag):
           retval.append(flagmap[mdflag])
   retval.sort()
   return '(' + ' '.join(retval) + ')'
 imaputil.flagsmaildir2imap = lld_flagsmaildir2imap
 imaputil.flagsimap2maildir = lld_flagsimap2maildir
 # Grab some folders first, and archives later
 high = ['^important$', '^work$']
 low = ['^archives', '^spam$']
 import re
 def lld_cmp(x, y):
   for r in high:
       xm = re.search (r, x)
       ym = re.search (r, y)
       if xm and ym:
           return cmp(x, y)
       elif xm:
           return -1
       elif ym:
           return +1
   for r in low:
       xm = re.search (r, x)
       ym = re.search (r, y)
       if xm and ym:
           return cmp(x, y)
       elif xm:
           return +1
       elif ym:
           return -1
   return cmp(x, y)
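Before moving on, a quick sanity check of what those two hooks actually do. This is a Python 3 rendition for experimenting outside OfflineIMAP (the original is Python 2: `has_key()` and the `cmp()` builtin are gone, so this sketch uses `in` and `functools.cmp_to_key` instead):

```python
import re
from functools import cmp_to_key

def flagsimap2maildir(flagstring):
    # Same mapping as above, including the non-standard gnus-expire flag.
    flagmap = {'\\seen': 'S', '\\answered': 'R', '\\flagged': 'F',
               '\\deleted': 'T', '\\draft': 'D', 'gnus-expire': 'E'}
    flags = [x.lower() for x in flagstring[1:-1].split()]
    return sorted(flagmap[f] for f in flags if f in flagmap)

high = ['^important$', '^work$']
low = ['^archives', '^spam$']

def cmp(x, y):  # Python 2's cmp(), removed in Python 3
    return (x > y) - (x < y)

def lld_cmp(x, y):
    for r in high:
        xm, ym = re.search(r, x), re.search(r, y)
        if xm and ym:
            return cmp(x, y)
        elif xm:
            return -1
        elif ym:
            return +1
    for r in low:
        xm, ym = re.search(r, x), re.search(r, y)
        if xm and ym:
            return cmp(x, y)
        elif xm:
            return +1
        elif ym:
            return -1
    return cmp(x, y)

# A seen, expirable message: the custom flag survives the translation.
print(flagsimap2maildir('(\\Seen gnus-expire)'))   # ['E', 'S']
# Important folders first, spam and archives last.
print(sorted(['spam', 'important', 'archives-2009', 'work', 'misc'],
             key=cmp_to_key(lld_cmp)))
```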
The first part of this file adds a new flag that OfflineIMAP will propagate back and forth. By default, only the standard IMAP flags are propagated; we also want to synchronize the gnus-expire flag that Gnus uses to mark expirable articles. It's a hack, but it works for now (maybe someday OfflineIMAP will propagate all the flags it finds?). The second part of that file can be dispensed with (and won't be used unless the foldersort option is uncommented in .offlineimaprc): it's only there to ensure that some important folders are propagated first, and some others go last. I don't know exactly how they are sorted by default, but I'd like the most important ones to come first, so I can start reading them while the archives and the spam are still being fetched. Gnus configuration:
(require 'offlineimap)
(add-hook 'gnus-before-startup-hook 'offlineimap)
Bonus: email searches. The simple way:
(require 'nnir)
This goes in your .gnus. Then your group buffer will get a new command. Mark some folders with #, then M-x gnus-group-make-nnir-group (or use the G G shortcut), and type in a set of keywords. This search is performed by the IMAP server (Dovecot), which may or may not be very efficient, especially if you select many folders. Bonus+: email searches, faster. The real cool kids use Notmuch nowadays, at least for email indexing and searching. It's fast, it allows complex queries, and it's generally cool. The downside is that it uses up quite a lot of disk space for its indices, in addition to the actual emails. For that reason I'll keep it to my main computer, and I'll stick to nnir on my laptop (which has the same setup apart from that).
(require 'notmuch)
(add-hook 'gnus-group-mode-hook 'lld-notmuch-shortcut)
(require 'org-gnus)
(defun lld-notmuch-shortcut ()
(define-key gnus-group-mode-map "GG" 'notmuch-search)
)
(defun lld-notmuch-file-to-group (file)
"Calculate the Gnus group name from the given file name.
"
(let ((group (file-name-directory (directory-file-name (file-name-directory file)))))
  (setq group (replace-regexp-in-string ".*/Maildir/" "nnimap+local:" group))
  (setq group (replace-regexp-in-string "/$" "" group))
  (if (string-match ":$" group)
      (concat group "INBOX")
    (replace-regexp-in-string ":\\." ":" group))))
(defun lld-notmuch-goto-message-in-gnus ()
"Open a summary buffer containing the current notmuch
article."
(interactive)
(let ((group (lld-notmuch-file-to-group (notmuch-show-get-filename)))
      (message-id (replace-regexp-in-string
                   "^id:" "" (notmuch-show-get-message-id))))
  (if (and group message-id)
      (progn 
  (switch-to-buffer "*Group*")
  (org-gnus-follow-link group message-id))
    (message "Couldn't get relevant infos for switching to Gnus."))))
(define-key notmuch-show-mode-map (kbd "C-c C-c") 'lld-notmuch-goto-message-in-gnus)
Wrap-up. This setup can be replicated on several computers, of course. I have it on two, and there's no reason I couldn't have more. The flags do get propagated back and forth, including the Gnus-specific expirable flag. Accessing the local Dovecot is much faster than going through the DSL to the master IMAP server, and I'm pretty convinced that OfflineIMAP with its multi-threading is also faster than Gnus, even talking to the same remote server. The email searching with Notmuch is a nice bonus, especially since searches are fast too (and this despite my 8-year-old computer). There are a few minor glitches. I can live with them, but I should let you know anyway. Apart from that, I'm pretty happy with this new setup. I hope this documentation will be useful to others, and spread the happiness around. Send your thanks to the authors of the software involved (Gnus, Dovecot, OfflineIMAP, offlineimap.el, Procmail, and so on). Let's see how many years I'll keep that system!

25 June 2010

Roland Mas: FusionForge news, June 2010

Another month, another update, but nothing spectacular to be announced in FusionForge-world. We're still working on finishing the transition to the new configuration system, we're testing the migration to a simpler and more flexible set of Apache configuration files, and work is in progress on the RPM packaging. And so on. Possibly the most newsworthy item is the FusionForge presence at next month's Libre Software Meeting in Bordeaux (the RMLL in French). I'll do a "FusionForge, one year and a half later" talk summarising the status and progress of FusionForge, and there'll also be a *forge devroom where we'll mingle with people interested in all kinds of software forges. Come see us if you're around!

23 May 2010

Roland Mas: FusionForge news, May 2010

The usual semi-regular bits of news from the FusionForge project. We continue being quite active, with several hundred commits each month. The momentum doesn't seem to stop even after the 5.0 release last month. A large part of the activity stems from the Coclico project, which has several work-packages related to convergence of code across forges (mostly between FusionForge and Codendi). This convergence comes in three flavours: All in all, a fairly busy period for FusionForge. The current trunk is evolving rather fast, with some long-overdue rewrites being underway. Interesting times.

29 March 2010

Roland Mas: FusionForge 5.0

Fourteen months after the renaming of the Free/Open Source code of GForge 4.x to the new FusionForge name, we're pleased to announce version 5.0. As mentioned in the release notes, this is still an incremental step over version 4.8 rather than a revolution, but the changes are important enough, and numerous enough, that we felt it justified to bump the major version number. Major improvements, beyond a host of bugfixes, include: FusionForge 5.0 now also includes new plugins that were previously only floating around (or completely private): These plugins, as well as a large part of the improvements in the trackers and the rewritten Mediawiki plugin, are a direct consequence of the upstreaming of work having been done in private instances of forges. We're happy to note that this goal of ours (to merge local patches into the central repository when it makes sense) seems to be working well. For the record, this 5.0 release includes work and plugins that were reintegrated from sources such as Alcatel-Lucent, Adullact and AdaCore. This release is also the first to have had the benefit of automated testing during the whole cycle. Coverage isn't 100 % yet, but the existing unit tests and functional tests help us be confident in the quality of the release. We'll keep adding more tests as time passes, of course. Looking back at the initial goals stated when the project started, we seem to be on the right track: We still need to work on the database schema and the cross-distro part, as well as cross-forge interoperability. The good news is that work is happening on these fronts already. And with almost 2500 commits, we truly seem to have accomplished at least one of the (implicit) goals: to bring development back to a healthy state. And we're far from being out of ideas for the future, so there's a lot of good stuff still cooking!

Roland Mas: FusionForge news, March 2010

Here's another quick update on the status of FusionForge. We released version 4.8.3. Nothing earth-shattering, but a collection of bugfixes that had accumulated on the 4.8 branch. If you're running a patched version, you might want to merge. We also published the second release candidate for 5.0. It's not final yet (there have been a few commits on that branch since then), but we're running out of known bugs. We're currently down to zero open bugs targeting 5.0, so the actual release is probably going to happen in a matter of days. 5.0rc2 is currently available in Debian experimental for those who want to test it, and the final 5.0 will be uploaded to unstable, and hopefully migrate to Squeeze in due time. Stay tuned.

20 February 2010

Roland Mas: FusionForge news, February 2010

This is getting to be old news, and others have blogged about it before I did, but here's my summary of the recent activity in and around FusionForge. The early February meeting was a success, and gathered about twenty people on the first day and a dozen or so on the second day (not planned initially). My impression is that there was a healthy mix of FusionForge hackers, FusionForge users, and people from other forge communities (Codendi, NovaForge, and even one representative from nFORGE, from South Korea). I'm not going to repeat all that was said then, especially since the proceedings are online. Beyond the technical points, I'll just advertise PlanetForge again, since everyone present agreed we had lots to share and that this site would be a good and relatively neutral place. If you're into forges, I recommend joining us in that community. On the purely FusionForge front, the news is good too. Most of the major pieces we want to see in the next release (which is probably going to be called 5.0) are in place. The last blocker we had was the merge of the rework of the default theme for better accessibility and easier maintenance and customisability (most of the theming now happens in CSS). This merge was completed this week, and although there are still a few rough edges, it's mostly done. We'll try to fix most of these rough edges soonish, then start a stabilisation branch towards 5.0, so more experimental work can start again on trunk. For the impatient and the curious, there's a list of new features on the fusionforge.org homepage, and the site is now running code from trunk. Of course, we're eager to get testers for that, which is why I prepared snapshot packages. They are currently stuck in NEW on their way to the official Debian experimental repository due to the renaming of the source package and the introduction of plenty of new binary packages, but they can already be obtained from my unofficial repository at people.debian.org.
The packages are built for Debian unstable, but they seem to run just fine on Lenny if you grab mediawiki from backports.org (only required for the Mediawiki plugin, of course), and libnusoap-php and php-htmlpurifier from Debian testing (they don't drag in any extra dependencies). I'll end this note by reminding people of the announcement I made three months ago: as of this week, Debian Etch is no longer officially supported security-wise, and so neither is GForge 4.5. As far as I know, I was the last person doing that, and my incentive went away the day Etch ceased to be supported, since it was also the day the Adullact forge finally migrated from Etch with GForge 4.5 to Lenny with FusionForge 4.8. If you're still using 4.5, you should be aware of that. That more or less wraps it up for now. The next announcement is likely to be about a release candidate.

26 January 2010

Roland Mas: sgeps follow-up

Just an update about sgeps, because it seems to have made a small stir (which is more than I expected).

22 January 2010

Roland Mas: Simple GnuPG-encrypted password store

I've been accumulating passwords recently. More than I could remember all in one go. I even got worried that I'd locked myself out of one of my own servers recently. So I decided to play it safe and store the passwords somewhere. However, plain text files, even on an encrypted disk, aren't the most secure plan, so I went shopping for a tool that would store passwords in encrypted files and wouldn't be too inconvenient to use. I found a few (pwsafe, keysafe, keepassx, yapet and so on), but they all seem to be either graphical or to use their own encryption scheme and (presumably) storage format. Being rather nervous about long-term data accessibility, I thus decided to roll my own script, one that would be as simple as possible while doing just the required amount of work. I call the result sgeps, for simple GnuPG-encrypted password store. Note the initial s: I didn't invent any wheels. The code comments should give an idea of the capabilities of sgeps:
  # Usage: sgeps --create                     to create the store
  #        sgeps --add <key>                  to add a key/value to the store
  #        sgeps --list                       to list existing keys
  #        sgeps --add --overwrite <key>      to replace a key/value
I trust both GnuPG and Perl to stay around for quite some time, so hopefully I can forget even the passwords I use very rarely and still be able to recover them later. Even in the event of a hard drive dying, since the encrypted store can now be backed up and burnt on DVDs. I just need to be careful about my GnuPG key. Interested people can grab sgeps from its Bazaar branch with bzr branch http://bzr.debian.org/users/lolando/sgeps/trunk/ or browse it on the web interface. I don't plan to make a Debian package for a hundred lines of Perl code, but if anyone is interested, feel free to include it in an existing package (moreutils maybe?).

15 January 2010

Roland Mas: FusionForge developers/users meeting coming up

News is slow this month on the FusionForge development front. We're all busy gathering all the things that we want to go into the next release, but there's no big news from the code. However, there is something of interest. You may have heard about the Coclico project, which is an initiative aiming at collaboration and convergence between several forge engines, most notably FusionForge, Codendi and Novaforge. That project was started last October, and it holds regular meetings with its members. The next meeting is scheduled for the 2nd of February in Paris, and we thought we could host an open meeting on the 3rd for non-Coclico members, a bit like the forge meeting we had last year (which is when FusionForge was officially born), but with an emphasis on what Coclico did so far. Since most of the FusionForge hackers are in Western Europe, and several are in Paris (especially if we add those who go to Paris for the Coclico meeting), we thought it would also be a good opportunity to gather for a technical and social meeting. It seems the Coclico open session didn't generate much interest this time (at least, it hasn't so far), so I proposed to hijack the room for this FusionForge meeting, and I didn't hear any objections. I have several themes I'd like to discuss with people, and possibly start implementing during that day, and so on. These are in no way specific to FusionForge, and in fact I think it would be great if hackers/users of other forges were present, because we could benefit a great deal from their experience and plans. But if we find ourselves amongst FF people only, I think these would be good to discuss, possibly write some code for, and go home with a clearer picture of where our efforts should focus in the near future. I'd therefore like to invite interested people to mark the 3rd of February on their agendas. The meeting will take place in Issy-les-Moulineaux (near Paris, within reach of the tube).
If you're interested, please get in touch with us (#FusionForge on the FreeNode IRC network, or the fusionforge-general mailing list), so we can have a rough estimate of how many people to expect. The meeting room is provided by France Télécom, and they're probably going to need numbers if not names. Further details will be announced when known.
